Claudette’s source

This is the ‘literate’ source code for Claudette. You can view the fully rendered version of the notebook here, or you can clone the git repo and run the interactive notebook in Jupyter. The notebook is converted to the Python module claudette/core.py using nbdev. The goal of this source code is both to create the Python module and to teach the reader how it is created, without assuming much existing knowledge of Claude’s API.

Most of the time you’ll see that we write some source code first, and then a description or discussion of it afterwards.

Setup

import os
# os.environ['ANTHROPIC_LOG'] = 'debug'

To print every HTTP request and response in full, uncomment the above line. This functionality is provided by Anthropic’s SDK.

Tip

If you’re reading the rendered version of this notebook, you’ll see an “Exported source” collapsible widget below. If you’re reading the source notebook directly, you’ll see #| exports at the top of the cell. These show that this piece of code will be exported into the Python module that this notebook creates. No other code will be included – any other code in this notebook is just for demonstration, documentation, and testing.

You can toggle expanding/collapsing the source code of all exported sections by using the </> Code menu in the top right of the rendered notebook page.

Exported source
model_types = {
    # Anthropic
    'claude-opus-4-20250514': 'opus',
    'claude-sonnet-4-20250514': 'sonnet',
    'claude-3-opus-20240229': 'opus-3',
    'claude-3-7-sonnet-20250219': 'sonnet-3-7',
    'claude-3-5-sonnet-20241022': 'sonnet-3-5',
    'claude-3-haiku-20240307': 'haiku-3',
    'claude-3-5-haiku-20241022': 'haiku-3-5',
    # AWS
    'anthropic.claude-3-opus-20240229-v1:0': 'opus',
    'anthropic.claude-3-5-sonnet-20241022-v2:0': 'sonnet',
    'anthropic.claude-3-sonnet-20240229-v1:0': 'sonnet',
    'anthropic.claude-3-haiku-20240307-v1:0': 'haiku',
    # Google
    'claude-3-opus@20240229': 'opus',
    'claude-3-5-sonnet-v2@20241022': 'sonnet',
    'claude-3-sonnet@20240229': 'sonnet',
    'claude-3-haiku@20240307': 'haiku',
}

all_models = list(model_types)
models
['claude-opus-4-20250514',
 'claude-sonnet-4-20250514',
 'claude-3-opus-20240229',
 'claude-3-7-sonnet-20250219',
 'claude-3-5-sonnet-20241022']
Exported source
text_only_models = ('claude-3-5-haiku-20241022',)
Exported source
has_streaming_models = set(all_models)
has_system_prompt_models = set(all_models)
has_temperature_models = set(all_models)
has_extended_thinking_models = {'claude-opus-4-20250514', 'claude-sonnet-4-20250514', 'claude-3-7-sonnet-20250219'}
has_extended_thinking_models
{'claude-3-7-sonnet-20250219',
 'claude-opus-4-20250514',
 'claude-sonnet-4-20250514'}

source

can_use_extended_thinking

 can_use_extended_thinking (m)
Exported source
def can_stream(m): return m in has_streaming_models
def can_set_system_prompt(m): return m in has_system_prompt_models
def can_set_temperature(m): return m in has_temperature_models
def can_use_extended_thinking(m): return m in has_extended_thinking_models

source

can_set_temperature

 can_set_temperature (m)

source

can_set_system_prompt

 can_set_system_prompt (m)

source

can_stream

 can_stream (m)

We include these functions to provide a uniform library interface with cosette, since OpenAI models such as o1 do not have many of these capabilities.

assert can_stream('claude-3-5-sonnet-20241022') and can_set_system_prompt('claude-3-5-sonnet-20241022') and can_set_temperature('claude-3-5-sonnet-20241022')
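
For instance, we can check extended thinking support the same way; here Sonnet 4 supports it and Haiku 3.5 doesn’t:

can_use_extended_thinking('claude-sonnet-4-20250514'), can_use_extended_thinking('claude-3-5-haiku-20241022')
(True, False)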

These are the current versions and prices of Anthropic’s models at the time of writing.

model = models[1]; model
'claude-sonnet-4-20250514'

For examples, we’ll use the latest Sonnet, since it’s awesome.

Anthropic SDK

cli = Anthropic()

This is the client that Anthropic’s Python SDK provides for interacting with Claude. To use it, pass it a list of messages, each with content and a role. The roles should alternate between user and assistant.

Tip

After the code below you’ll see an indented section with an orange vertical line on the left. This is used to show the result of running the code above. Because the code is running in a Jupyter Notebook, we don’t have to use print to display results, we can just type the expression directly, as we do with r here.

m = {'role': 'user', 'content': "I'm Jeremy"}
r = cli.messages.create(messages=[m], model=model, max_tokens=100)
r

Nice to meet you, Jeremy! How can I help you today?

  • id: msg_013DCRcUimBEF8kpHKygnzQB
  • content: [{'citations': None, 'text': 'Nice to meet you, Jeremy! How can I help you today?', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 17, 'server_tool_use': None, 'service_tier': 'standard'}

Formatting output

That output is pretty long and hard to read, so let’s clean it up. We’ll start by pulling out the Content part of the message. To do that, we’re going to write our first function, which will be included in the claudette/core.py module.

Tip

This is the first exported public function or class we’re creating (the previous export was of a variable). In the rendered version of the notebook, for each of these you’ll see 4 things, in this order (unless the symbol starts with a single _, which indicates it’s private):

  • The signature (with the symbol name as a heading, with a horizontal rule above)
  • A table of parameter docs (if provided)
  • The doc string (in italics).
  • The source code (in a collapsible “Exported source” block)

After that, we generally provide a bit more detail on what we’ve created, and why, along with a sample usage.


source

find_block

 find_block (r:collections.abc.Mapping, blk_type:type=<class
             'anthropic.types.text_block.TextBlock'>)

Find the first block of type blk_type in r.content.

Type Default Details
r Mapping The message to look in
blk_type type TextBlock The type of block to find
Exported source
def find_block(r:abc.Mapping, # The message to look in
               blk_type:type=TextBlock  # The type of block to find
              ):
    "Find the first block of type `blk_type` in `r.content`."
    return first(o for o in r.content if isinstance(o,blk_type))

This makes it easier to grab the needed parts of Claude’s responses, which can include multiple pieces of content. By default, we look for the first text block. That will generally have the content we want to display.

find_block(r)
TextBlock(citations=None, text='Nice to meet you, Jeremy! How can I help you today?', type='text')
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    return blk.text.strip() if hasattr(blk,'text') else str(blk)

For display purposes, we often just want to show the text itself.

contents(r)
'Nice to meet you, Jeremy! How can I help you today?'
Exported source
@patch
def _repr_markdown_(self:(Message)):
    det = '\n- '.join(f'{k}: `{v}`' for k,v in self.model_dump().items())
    cts = re.sub(r'\$', '&#36;', contents(self))  # escape `$` for jupyter latex
    return f"""{cts}

<details>

- {det}

</details>"""

Jupyter looks for a _repr_markdown_ method in displayed objects; we add this in order to display just the content text, and collapse full details into a hideable section. Note that patch is from fastcore, and is used to add (or replace) functionality in an existing class. We pass the class(es) that we want to patch as type annotations to self. In this case, _repr_markdown_ is being added to Anthropic’s Message class, so when we display the message now we just see the contents, and the details are hidden away in a collapsible details block.
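
If you haven’t seen patch before, here’s a minimal sketch of how it works, using a throwaway Toy class that isn’t part of claudette:

from fastcore.basics import patch

class Toy: pass

@patch
def greet(self:Toy): return 'hi'  # the type annotation on `self` tells `patch` which class to add the method to

Toy().greet()
'hi'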

r

Nice to meet you, Jeremy! How can I help you today?

  • id: msg_013DCRcUimBEF8kpHKygnzQB
  • content: [{'citations': None, 'text': 'Nice to meet you, Jeremy! How can I help you today?', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 17, 'server_tool_use': None, 'service_tier': 'standard'}

One key part of the response is the usage key, which tells us how many tokens we used by returning a Usage object.

We’ll add some helpers to make things a bit cleaner for creating and formatting these objects.

r.usage
In: 10; Out: 17; Cache create: 0; Cache read: 0; Total Tokens: 27; Server tool use (web search requests): 0

source

server_tool_usage

 server_tool_usage (web_search_requests=0)

Little helper to create a server tool usage object

Exported source
def server_tool_usage(web_search_requests=0):
    'Little helper to create a server tool usage object'
    return ServerToolUsage(web_search_requests=web_search_requests)

source

usage

 usage (inp=0, out=0, cache_create=0, cache_read=0,
        server_tool_use=ServerToolUsage(web_search_requests=0))

Slightly more concise version of Usage.

Type Default Details
inp int 0 input tokens
out int 0 Output tokens
cache_create int 0 Cache creation tokens
cache_read int 0 Cache read tokens
server_tool_use ServerToolUsage ServerToolUsage(web_search_requests=0) server tool use
Exported source
def usage(inp=0, # input tokens
          out=0,  # Output tokens
          cache_create=0, # Cache creation tokens
          cache_read=0, # Cache read tokens
          server_tool_use=server_tool_usage() # server tool use
         ):
    'Slightly more concise version of `Usage`.'
    return Usage(input_tokens=inp, output_tokens=out, cache_creation_input_tokens=cache_create,
                 cache_read_input_tokens=cache_read, server_tool_use=server_tool_use)

The constructor provided by Anthropic is rather verbose, so we clean it up a bit, using a lowercase version of the name.

usage(5)
In: 5; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 5; Server tool use (web search requests): 0

source

Usage.total

 Usage.total ()
Exported source
def _dgetattr(o,s,d): 
    "Like getattr, but returns the default if the result is None"
    return getattr(o,s,d) or d

@patch(as_prop=True)
def total(self:Usage): return self.input_tokens+self.output_tokens+_dgetattr(self, "cache_creation_input_tokens",0)+_dgetattr(self, "cache_read_input_tokens",0)

Adding a total property to Usage makes it easier to see how many tokens we’ve used up altogether.

usage(5,1).total
6

source

Usage.__repr__

 Usage.__repr__ ()

Return repr(self).

Exported source
@patch
def __repr__(self:Usage):
    io_toks = f'In: {self.input_tokens}; Out: {self.output_tokens}'
    cache_toks = f'Cache create: {_dgetattr(self, "cache_creation_input_tokens",0)}; Cache read: {_dgetattr(self, "cache_read_input_tokens",0)}'
    server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
    server_tool_use_str = f'Server tool use (web search requests): {server_tool_use.web_search_requests}'
    total_tok = f'Total Tokens: {self.total}'
    return f'{io_toks}; {cache_toks}; {total_tok}; {server_tool_use_str}'

In Python, patching __repr__ lets us change how an object is displayed. (More generally, methods starting and ending in __ in Python are called dunder methods, and have some magic behavior – such as, in this case, changing how an object is displayed.) We won’t be displaying ServerToolUsage objects directly, so we can handle their display behavior in the same Usage __repr__.

usage(5)
In: 5; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 5; Server tool use (web search requests): 0

source

ServerToolUsage.__add__

 ServerToolUsage.__add__ (b)

Add together each of the server tool use counts

Exported source
@patch
def __add__(self:ServerToolUsage, b):
    "Add together each of the server tool use counts"
    return ServerToolUsage(web_search_requests=self.web_search_requests+b.web_search_requests)

And, patching __add__ lets + work on a ServerToolUsage as well as a Usage object.

server_tool_usage(1) + server_tool_usage(2)
ServerToolUsage(web_search_requests=3)

source

Usage.__add__

 Usage.__add__ (b)

Add together each of input_tokens and output_tokens

Exported source
@patch
def __add__(self:Usage, b):
    "Add together each of `input_tokens` and `output_tokens`"
    return usage(self.input_tokens+b.input_tokens, self.output_tokens+b.output_tokens,
                 _dgetattr(self,'cache_creation_input_tokens',0)+_dgetattr(b,'cache_creation_input_tokens',0),
                 _dgetattr(self,'cache_read_input_tokens',0)+_dgetattr(b,'cache_read_input_tokens',0),
                 _dgetattr(self,'server_tool_use',server_tool_usage())+_dgetattr(b,'server_tool_use',server_tool_usage()))
r.usage+r.usage + usage(server_tool_use=server_tool_usage(1))
In: 20; Out: 34; Cache create: 0; Cache read: 0; Total Tokens: 54; Server tool use (web search requests): 1

Creating messages

Creating correctly formatted dicts from scratch every time isn’t very handy, so we’ll import a couple of helper functions from the msglm library.

Let’s use mk_msg to recreate our msg {'role': 'user', 'content': "I'm Jeremy"} from earlier.

prompt = "I'm Jeremy"
m = mk_msg(prompt)
r = cli.messages.create(messages=[m], model=model, max_tokens=100)
r

Hi Jeremy! Nice to meet you. How are you doing today? Is there anything I can help you with?

  • id: msg_012NpnmWdR81mrRNGmSyoJVP
  • content: [{'citations': None, 'text': 'Hi Jeremy! Nice to meet you. How are you doing today? Is there anything I can help you with?', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 10, 'output_tokens': 26, 'server_tool_use': None, 'service_tier': 'standard'}

We can pass more than just text messages to Claude. As we’ll see later we can also pass images, SDK objects, etc. To handle these different data types we need to pass the type along with our content to Claude.

Here’s an example of a multimodal message containing text and images.

{
    'role': 'user', 
    'content': [
        {'type':'text', 'text':'What is in the image?'},
        {
            'type':'image', 
            'source': {
                'type':'base64', 'media_type':'media_type', 'data': 'data'
            }
        }
    ]
}

mk_msg infers the type automatically and creates the appropriate data structure.
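
For instance, here’s a rough sketch of building that same multimodal message with mk_msg, assuming mk_msg is given raw image bytes alongside text; samples/puppy.jpg is just a hypothetical placeholder path, and the output is abbreviated:

from pathlib import Path
img = Path('samples/puppy.jpg').read_bytes()  # hypothetical image file, for illustration only
mk_msg(['What is in the image?', img])
# -> {'role': 'user', 'content': [{'type': 'text', ...}, {'type': 'image', 'source': {'type': 'base64', ...}}]}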

LLMs don’t actually have state; instead, dialogs are created by passing back all previous prompts and responses every time. With Claude, these always alternate between user and assistant. We’ll use mk_msgs from msglm to make it easier to build up these dialog lists.

msgs = mk_msgs([prompt, r, "I forgot my name. Can you remind me please?"]) 
msgs
[{'role': 'user', 'content': "I'm Jeremy"},
 {'role': 'assistant',
  'content': [TextBlock(citations=None, text='Hi Jeremy! Nice to meet you. How are you doing today? Is there anything I can help you with?', type='text')]},
 {'role': 'user', 'content': 'I forgot my name. Can you remind me please?'}]
cli.messages.create(messages=msgs, model=model, max_tokens=200)

Your name is Jeremy - you just introduced yourself to me at the start of our conversation!

  • id: msg_01FAWqmciDafXUNSFTJSsroy
  • content: [{'citations': None, 'text': 'Your name is Jeremy - you just introduced yourself to me at the start of our conversation!', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 50, 'output_tokens': 21, 'server_tool_use': None, 'service_tier': 'standard'}

Client


source

Client

 Client (model, cli=None, log=False, cache=False)

Basic Anthropic messages client.

Exported source
class Client:
    def __init__(self, model, cli=None, log=False, cache=False):
        "Basic Anthropic messages client."
        self.model,self.use = model,usage()
        self.text_only = model in text_only_models
        self.log = [] if log else None
        self.c = (cli or Anthropic(default_headers={'anthropic-beta': 'prompt-caching-2024-07-31'}))
        self.cache = cache

We’ll create a simple Client for Anthropic which tracks usage and stores the model to use. We don’t add any methods right away – instead we’ll use patch for that, so we can add and document them incrementally.

c = Client(model)
c.use
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Server tool use (web search requests): 0
Exported source
@patch
def _r(self:Client, r:Message, prefill=''):
    "Store the result of the message and accrue total usage."
    if prefill:
        blk = find_block(r)
        blk.text = prefill + (blk.text or '')
    self.result = r
    self.use += r.usage
    self.stop_reason = r.stop_reason
    self.stop_sequence = r.stop_sequence
    return r

We use a _ prefix on private methods, but we document them here in the interests of literate source code.

_r will be used each time we get a new result, to track usage and also to keep the result available for later.

c._r(r)
c.use
In: 10; Out: 26; Cache create: 0; Cache read: 0; Total Tokens: 36; Server tool use (web search requests): 0

Whereas OpenAI’s models use a stream parameter for streaming, Anthropic’s use a separate method. We implement Anthropic’s approach in a private method, and then use a stream parameter in __call__ for consistency:

Exported source
@patch
def _log(self:Client, final, prefill, msgs, maxtok=None, sp=None, temp=None, stream=None, stop=None, **kwargs):
    self._r(final, prefill)
    if self.log is not None: self.log.append({
        "msgs": msgs, "prefill": prefill, **kwargs,
        "msgs": msgs, "prefill": prefill, "maxtok": maxtok, "sp": sp, "temp": temp, "stream": stream, "stop": stop, **kwargs,
        "result": self.result, "use": self.use, "stop_reason": self.stop_reason, "stop_sequence": self.stop_sequence
    })
    return self.result
Exported source
@patch
def _stream(self:Client, msgs:list, prefill='', **kwargs):
    with self.c.messages.stream(model=self.model, messages=mk_msgs(msgs, cache=self.cache, cache_last_ckpt_only=self.cache), **kwargs) as s:
        if prefill: yield(prefill)
        yield from s.text_stream
        self._log(s.get_final_message(), prefill, msgs, **kwargs)

Claude supports adding an extra assistant message at the end, which contains the prefill – i.e. the text we want Claude to assume the response starts with. However, Claude doesn’t actually repeat that in the response, so for convenience we add it back in.
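
For instance, here’s roughly the message list that results from appending a prefill as a trailing assistant message (this is what _precall, below, builds for us, along with cache handling):

mk_msgs(["Concisely, what is the meaning of life?", "According to Douglas Adams,"])
[{'role': 'user', 'content': 'Concisely, what is the meaning of life?'},
 {'role': 'assistant', 'content': 'According to Douglas Adams,'}]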

Exported source
@patch
def _precall(self:Client, msgs, prefill, stop, kwargs):
    pref = [prefill.strip()] if prefill else []
    if not isinstance(msgs,list): msgs = [msgs]
    if stop is not None:
        if not isinstance(stop, (list)): stop = [stop]
        kwargs["stop_sequences"] = stop
    msgs = mk_msgs(msgs+pref, cache=self.cache, cache_last_ckpt_only=self.cache)
    return msgs
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             **kwargs):
    "Make a call to Claude."
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(
        model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, **kwargs)

Defining __call__ lets us use an object like a function (i.e. it’s callable). We use it as a small wrapper over messages.create. However, we’re not exporting this version just yet – we have some additions we’ll make in a moment…

c = Client(model, log=True)
c.use
In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Server tool use (web search requests): 0
c('Hi')

Hello! How are you doing today? Is there anything I can help you with?

  • id: msg_01X1fEz2JkSdssCCkdDSXyLu
  • content: [{'citations': None, 'text': 'Hello! How are you doing today? Is there anything I can help you with?', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 8, 'output_tokens': 20, 'server_tool_use': None, 'service_tier': 'standard'}
c.use
In: 8; Out: 20; Cache create: 0; Cache read: 0; Total Tokens: 28; Server tool use (web search requests): 0

Let’s try out prefill:

q = "Concisely, what is the meaning of life?"
pref = 'According to Douglas Adams,'
c(q, prefill=pref)

According to Douglas Adams,it’s 42.

More seriously, there’s no universal answer. Common perspectives include:

  • Religious: To serve/connect with the divine
  • Humanistic: To create meaning through relationships, growth, and contribution
  • Existentialist: To create your own purpose in an inherently meaningless universe
  • Biological: To survive, reproduce, and evolve

The question itself might be more valuable than any single answer.

  • id: msg_01BaeSoGE4QmzFvuqLoY8u4C
  • content: [{'citations': None, 'text': "According to Douglas Adams,it's 42.\n\nMore seriously, there's no universal answer. Common perspectives include:\n\n- **Religious**: To serve/connect with the divine\n- **Humanistic**: To create meaning through relationships, growth, and contribution\n- **Existentialist**: To create your own purpose in an inherently meaningless universe\n- **Biological**: To survive, reproduce, and evolve\n\nThe question itself might be more valuable than any single answer.", 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 24, 'output_tokens': 98, 'server_tool_use': None, 'service_tier': 'standard'}

We can pass stream=True to stream the response back incrementally:

for o in c('Hi', stream=True): print(o, end='')
Hello! How are you doing today? Is there anything I can help you with?
c.use
In: 40; Out: 138; Cache create: 0; Cache read: 0; Total Tokens: 178; Server tool use (web search requests): 0
for o in c(q, prefill=pref, stream=True): print(o, end='')
According to Douglas Adams,it's 42.

More seriously, there's no universal answer. Common perspectives include:

- **Religious**: To serve/connect with the divine
- **Humanistic**: To create meaning through relationships, growth, and contribution
- **Existentialist**: To create your own purpose in an inherently meaningless universe
- **Biological**: To survive, reproduce, and evolve

The question itself might be more valuable than any single answer.
c.use
In: 64; Out: 236; Cache create: 0; Cache read: 0; Total Tokens: 300; Server tool use (web search requests): 0

Pass a stop sequence if you want Claude to stop generating text when it encounters it.

c("Count from 1 to 10", stop="5")

1, 2, 3, 4,

  • id: msg_01Ao3nZpkgzHAzCWJ3QQi3rt
  • content: [{'citations': None, 'text': '1, 2, 3, 4, ', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: stop_sequence
  • stop_sequence: 5
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 15, 'output_tokens': 14, 'server_tool_use': None, 'service_tier': 'standard'}

This also works with streaming, and you can pass more than one stop sequence:

for o in c("Count from 1 to 10", stop=["2", "yellow"], stream=True): print(o, end='')
print(c.stop_reason, c.stop_sequence)
1, stop_sequence 2

You can check the logs:

c.log[-1]
{'msgs': [{'role': 'user', 'content': 'Count from 1 to 10'}],
 'prefill': '',
 'max_tokens': 4096,
 'system': '',
 'temperature': 0,
 'stop_sequences': ['2', 'yellow'],
 'maxtok': None,
 'sp': None,
 'temp': None,
 'stream': None,
 'stop': None,
 'result': Message(id='msg_01QCT84uPf74LYuNataKD1Cz', content=[TextBlock(citations=None, text='1, ', type='text')], model='claude-sonnet-4-20250514', role='assistant', stop_reason='stop_sequence', stop_sequence='2', type='message', usage=In: 15; Out: 5; Cache create: 0; Cache read: 0; Total Tokens: 20; Server tool use (web search requests): 0),
 'use': In: 94; Out: 255; Cache create: 0; Cache read: 0; Total Tokens: 349; Server tool use (web search requests): 0,
 'stop_reason': 'stop_sequence',
 'stop_sequence': '2'}

We’ve shown the token usage, but what we really care about is pricing. Let’s extract the latest pricing from Anthropic into a pricing dict.


source

get_pricing

 get_pricing (m, u)
Exported source
def get_pricing(m, u):
    return pricing[m][:3] if u.prompt_token_count < 128_000 else pricing[m][3:]

Similarly, let’s get the pricing for the latest server tools:

We’ll patch Usage to enable it to compute the cost given pricing.


source

Usage.cost

 Usage.cost (costs:tuple)
Exported source
@patch
def cost(self:Usage, costs:tuple) -> float:
    cache_w, cache_r = _dgetattr(self, "cache_creation_input_tokens",0), _dgetattr(self, "cache_read_input_tokens",0)
    tok_cost = sum([self.input_tokens * costs[0] +  self.output_tokens * costs[1] +  cache_w * costs[2] + cache_r * costs[3]]) / 1e6
    server_tool_use = _dgetattr(self, "server_tool_use",server_tool_usage())
    server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
    return tok_cost + server_tool_cost
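
For example, using a made-up costs tuple of (input, output, cache write, cache read) dollars per million tokens (illustrative numbers only, not Anthropic’s real prices):

usage(1000, 500, cache_create=200, cache_read=100).cost((1, 2, 1.25, 0.1))
0.00226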

source

Client.cost

 Client.cost ()
Exported source
@patch(as_prop=True)
def cost(self: Client) -> float: return self.use.cost(pricing[model_types[self.model]])

source

get_costs

 get_costs (c)
Exported source
def get_costs(c):
    costs = pricing[model_types[c.model]]
    
    inp_cost = c.use.input_tokens * costs[0] / 1e6
    out_cost = c.use.output_tokens * costs[1] / 1e6

    cache_w = c.use.cache_creation_input_tokens   
    cache_r = c.use.cache_read_input_tokens
    cache_cost = (cache_w * costs[2] + cache_r * costs[3]) / 1e6

    server_tool_use = c.use.server_tool_use
    server_tool_cost = server_tool_use.web_search_requests * server_tool_pricing['web_search_requests'] / 1e3
    return inp_cost, out_cost, cache_cost, cache_w + cache_r, server_tool_cost
Exported source
@patch
def _repr_markdown_(self:Client):
    if not hasattr(self,'result'): return 'No results yet'
    msg = contents(self.result)
    inp_cost, out_cost, cache_cost, cached_toks, server_tool_cost = get_costs(self)
    return f"""{msg}

| Metric | Count | Cost (USD) |
|--------|------:|-----:|
| Input tokens | {self.use.input_tokens:,} | {inp_cost:.6f} |
| Output tokens | {self.use.output_tokens:,} | {out_cost:.6f} |
| Cache tokens | {cached_toks:,} | {cache_cost:.6f} |
| Server tool use | {self.use.server_tool_use.web_search_requests:,} | {server_tool_cost:.6f} |
| **Total** | **{self.use.total:,}** | **${self.cost:.6f}** |"""
c

1,

| Metric | Count | Cost (USD) |
|--------|------:|-----:|
| Input tokens | 94 | 0.000282 |
| Output tokens | 255 | 0.003825 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| **Total** | **349** | **$0.004107** |

Tool use

Let’s now add tool use (aka function calling).


source

mk_tool_choice

 mk_tool_choice (choose:Union[str,bool,NoneType])

Create a tool_choice dict that’s ‘auto’ if choose is None, ‘any’ if it is True, or ‘tool’ otherwise

print(mk_tool_choice('sums'))
print(mk_tool_choice(True))
print(mk_tool_choice(None))
{'type': 'tool', 'name': 'sums'}
{'type': 'any'}
{'type': 'auto'}

Claude can be forced to use a particular tool, or select from a specific list of tools, or decide for itself when to use a tool. If you want to force a tool (or force choosing from a list), include a tool_choice param with a dict from mk_tool_choice.

For testing, we need a function that Claude can call; we’ll write a simple function that adds numbers together, and will tell us when it’s being called:

from dataclasses import dataclass
@dataclass
class MySum: val:int

def sums(
    a:int,  # First thing to sum
    b:int=1 # Second thing to sum
) -> int: # The sum of the inputs
    "Adds a + b."
    print(f"Finding the sum of {a} and {b}")
    return MySum(a + b)
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."

Claudette can autogenerate a schema thanks to the toolslm library. We’ll force the use of the sums tool we created earlier via mk_tool_choice.

tools=[get_schema(sums)]
choice = mk_tool_choice('sums')

We’ll start a dialog with Claude now. We’ll store the messages of our dialog in msgs. The first message will be our prompt pr, and we’ll pass our tools schema.

msgs = mk_msgs(pr)
r = c(msgs, sp=sp, tools=tools, tool_choice=choice)
r

ToolUseBlock(id=‘toolu_01S7Vpixf82dRF3TWKo5vsxG’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)

  • id: msg_01RYsZPPZdVGDqZthxYn132E
  • content: [{'id': 'toolu_01S7Vpixf82dRF3TWKo5vsxG', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: tool_use
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 443, 'output_tokens': 53, 'server_tool_use': None, 'service_tier': 'standard'}

When Claude decides that it should use a tool, it passes back a ToolUseBlock with the name of the tool to call, and the params to use.

We don’t want to allow it to call just any possible function (that would be a security disaster!) so we create a namespace – that is, a dictionary of allowable function names to call.

ns = mk_ns(sums)
ns
{'sums': <function __main__.sums(a: int, b: int = 1) -> int>}

source

mk_funcres

 mk_funcres (fc, ns)

Given tool use block fc, get tool result, and create a tool_result response.

Exported source
def mk_funcres(fc, ns):
    "Given tool use block `fc`, get tool result, and create a tool_result response."
    res = call_func(fc.name, fc.input, ns=ns, raise_on_err=False)
    return dict(type="tool_result", tool_use_id=fc.id, content=str(res))

We can now use the function requested by Claude. We look it up in ns, and pass in the provided parameters.

fcs = [o for o in r.content if isinstance(o,ToolUseBlock)]
fcs
[ToolUseBlock(id='toolu_01S7Vpixf82dRF3TWKo5vsxG', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')]
res = [mk_funcres(fc, ns=ns) for fc in fcs]
res
Finding the sum of 604542 and 6458932
[{'type': 'tool_result',
  'tool_use_id': 'toolu_01S7Vpixf82dRF3TWKo5vsxG',
  'content': 'MySum(val=7063474)'}]
def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    if hasattr(blk,'text'): return blk.text.strip()
    elif hasattr(blk,'content'): return blk.content.strip()
    return str(blk)

source

mk_toolres

 mk_toolres (r:collections.abc.Mapping,
             ns:Optional[collections.abc.Mapping]=None, obj:Optional=None)

Create a tool_result message from response r.

Type Default Details
r Mapping Tool use request response from Claude
ns Optional None Namespace to search for tools
obj Optional None Class to search for tools
Exported source
def mk_toolres(
    r:abc.Mapping, # Tool use request response from Claude
    ns:Optional[abc.Mapping]=None, # Namespace to search for tools
    obj:Optional=None # Class to search for tools
    ):
    "Create a `tool_result` message from response `r`."
    cts = getattr(r, 'content', [])
    res = [mk_msg(r.model_dump(), role='assistant')]
    if ns is None: ns=globals()
    if obj is not None: ns = mk_ns(obj)
    tcs = [mk_funcres(o, ns) for o in cts if isinstance(o,ToolUseBlock)]
    if tcs: res.append(mk_msg(tcs))
    return res

In order to tell Claude the result of the tool call, we pass back the tool use assistant request and the tool_result response.

tr = mk_toolres(r, ns=ns)
tr
Finding the sum of 604542 and 6458932
[{'role': 'assistant',
  'content': [{'id': 'toolu_01S7Vpixf82dRF3TWKo5vsxG',
    'input': {'a': 604542, 'b': 6458932},
    'name': 'sums',
    'type': 'tool_use'}]},
 {'role': 'user',
  'content': [{'type': 'tool_result',
    'tool_use_id': 'toolu_01S7Vpixf82dRF3TWKo5vsxG',
    'content': 'MySum(val=7063474)'}]}]
msgs
[{'role': 'user', 'content': 'What is 604542+6458932?'}]

We add this to our dialog, and now Claude has all the information it needs to answer our question.

msgs += tr
contents(c(msgs, sp=sp, tools=tools))
'The sum of 604542 + 6458932 is **7,063,474**.'
contents(msgs[-1])
'MySum(val=7063474)'
msgs
[{'role': 'user', 'content': 'What is 604542+6458932?'},
 {'role': 'assistant',
  'content': [{'id': 'toolu_01S7Vpixf82dRF3TWKo5vsxG',
    'input': {'a': 604542, 'b': 6458932},
    'name': 'sums',
    'type': 'tool_use'}]},
 {'role': 'user',
  'content': [{'type': 'tool_result',
    'tool_use_id': 'toolu_01S7Vpixf82dRF3TWKo5vsxG',
    'content': 'MySum(val=7063474)'}]}]

This works with methods as well – in this case, use the object itself for ns:

class Dummy:
    def sums(
        self,
        a:int,  # First thing to sum
        b:int=1 # Second thing to sum
    ) -> int: # The sum of the inputs
        "Adds a + b."
        print(f"Finding the sum of {a} and {b}")
        return a + b
tools = [get_schema(Dummy.sums)]
o = Dummy()
r = c(pr, sp=sp, tools=tools, tool_choice=choice)
tr = mk_toolres(r, obj=o)
msgs += tr
contents(c(msgs, sp=sp, tools=tools))
Finding the sum of 604542 and 6458932
'604542 + 6458932 = 7,063,474'

Text editing

Anthropic also has a special tool type specific to text editing.

tools = [text_editor_conf['sonnet']]
tools
[{'type': 'text_editor_20250429', 'name': 'str_replace_based_edit_tool'}]
pr = 'Could you please explain my _quarto.yml file?'
msgs = [mk_msg(pr)]
r = c(msgs, sp=sp, tools=tools)
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_018ZVXXQwenU4h52hFdhqGcy', input={'command': 'view', 'path': '_quarto.yml'}, name='str_replace_based_edit_tool', type='tool_use')

We’ve gone ahead and created a reference implementation that you can use directly from our text_editor module, or use as a reference for creating your own.

ns = mk_ns(str_replace_based_edit_tool)
tr = mk_toolres(r, ns=ns)
msgs += tr
print(contents(c(msgs, sp=sp, tools=tools))[:128])
Great! Let me explain your `_quarto.yml` configuration file section by section:

## Project Configuration
```yaml
project:
  typ

Callable Client


source

get_types

 get_types (msgs)
get_types(msgs)
['text', 'text', 'tool_use', 'tool_result']

source

Client.__call__

 Client.__call__ (msgs:list, sp='', temp=0, maxtok=4096, maxthinktok=0,
                  prefill='', stream:bool=False, stop=None,
                  tools:Optional[list]=None,
                  tool_choice:Optional[dict]=None,
                  metadata:MetadataParam|NotGiven=NOT_GIVEN, service_tier:
                  "Literal['auto','standard_only']|NotGiven"=NOT_GIVEN,
                  stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:Unio
                  n[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN,
                  temperature:float|NotGiven=NOT_GIVEN,
                  thinking:ThinkingConfigParam|NotGiven=NOT_GIVEN,
                  top_k:int|NotGiven=NOT_GIVEN,
                  top_p:float|NotGiven=NOT_GIVEN,
                  extra_headers:Headers|None=None,
                  extra_query:Query|None=None, extra_body:Body|None=None,
                  timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)

Make a call to Claude.

Type Default Details
msgs list List of messages in the dialog
sp str The system prompt
temp int 0 Temperature
maxtok int 4096 Maximum tokens
maxthinktok int 0 Maximum thinking tokens
prefill str Optional prefill to pass to Claude as start of its response
stream bool False Stream response?
stop NoneType None Stop sequence
tools Optional None List of tools to make available to Claude
tool_choice Optional None Optionally force use of some tool
metadata MetadataParam | NotGiven NOT_GIVEN
service_tier Literal[‘auto’, ‘standard_only’] | NotGiven NOT_GIVEN
stop_sequences List[str] | NotGiven NOT_GIVEN
system Union[str, Iterable[TextBlockParam]] | NotGiven NOT_GIVEN
temperature float | NotGiven NOT_GIVEN
thinking ThinkingConfigParam | NotGiven NOT_GIVEN
top_k int | NotGiven NOT_GIVEN
top_p float | NotGiven NOT_GIVEN
extra_headers Optional None Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs.
The extra values given here take precedence over values defined on the client or passed to this method.
extra_query Query | None None
extra_body Body | None None
timeout float | httpx.Timeout | None | NotGiven NOT_GIVEN
Exported source
@patch
@delegates(messages.Messages.create)
def __call__(self:Client,
             msgs:list, # List of messages in the dialog
             sp='', # The system prompt
             temp=0, # Temperature
             maxtok=4096, # Maximum tokens
             maxthinktok=0, # Maximum thinking tokens
             prefill='', # Optional prefill to pass to Claude as start of its response
             stream:bool=False, # Stream response?
             stop=None, # Stop sequence
             tools:Optional[list]=None, # List of tools to make available to Claude
             tool_choice:Optional[dict]=None, # Optionally force use of some tool
             **kwargs):
    "Make a call to Claude."
    if tools: kwargs['tools'] = [get_schema(o) if callable(o) else o for o in listify(tools)]
    if tool_choice: kwargs['tool_choice'] = mk_tool_choice(tool_choice)
    if maxthinktok: 
        kwargs['thinking']={'type':'enabled', 'budget_tokens':maxthinktok} 
        temp=1; prefill=''
    msgs = self._precall(msgs, prefill, stop, kwargs)
    if any(t == 'image' for t in get_types(msgs)): assert not self.text_only, f"Images are not supported by the current model type: {self.model}"
    if stream: return self._stream(msgs, prefill=prefill, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    res = self.c.messages.create(model=self.model, messages=msgs, max_tokens=maxtok, system=sp, temperature=temp, **kwargs)
    return self._log(res, prefill, msgs, maxtok, sp, temp, stream=stream, stop=stop, **kwargs)
a,b = 604542,6458932
pr = f"What is {a}+{b}?"
sp = "You are a summing expert."
for tools in [sums, [get_schema(sums)]]:
    r = c(pr, sp=sp, tools=tools, tool_choice='sums')
    print(r)
Message(id='msg_01LGAeFxqZ7twHEX4pCBVZrs', content=[ToolUseBlock(id='toolu_01BJu3iCynmPXB8v4ZJLswnb', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')], model='claude-sonnet-4-20250514', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 443; Out: 53; Cache create: 0; Cache read: 0; Total Tokens: 496; Server tool use (web search requests): 0)
Message(id='msg_01VuBhPBop1zqADvPahbWh9m', content=[ToolUseBlock(id='toolu_01XCnbujKTww58dmPXsiymhH', input={'a': 604542, 'b': 6458932}, name='sums', type='tool_use')], model='claude-sonnet-4-20250514', role='assistant', stop_reason='tool_use', stop_sequence=None, type='message', usage=In: 443; Out: 53; Cache create: 0; Cache read: 0; Total Tokens: 496; Server tool use (web search requests): 0)
ns = mk_ns(sums)
tr = mk_toolres(r, ns=ns)
Finding the sum of 604542 and 6458932

source

Client.structured

 Client.structured (msgs:list, tools:Optional[list]=None,
                    obj:Optional=None,
                    ns:Optional[collections.abc.Mapping]=None, sp='',
                    temp=0, maxtok=4096, maxthinktok=0, prefill='',
                    stream:bool=False, stop=None,
                    tool_choice:Optional[dict]=None,
                    metadata:MetadataParam|NotGiven=NOT_GIVEN, service_tie
                    r:"Literal['auto','standard_only']|NotGiven"=NOT_GIVEN
                    , stop_sequences:List[str]|NotGiven=NOT_GIVEN, system:
                    Union[str,Iterable[TextBlockParam]]|NotGiven=NOT_GIVEN
                    , temperature:float|NotGiven=NOT_GIVEN,
                    thinking:ThinkingConfigParam|NotGiven=NOT_GIVEN,
                    top_k:int|NotGiven=NOT_GIVEN,
                    top_p:float|NotGiven=NOT_GIVEN,
                    extra_headers:Headers|None=None,
                    extra_query:Query|None=None,
                    extra_body:Body|None=None,
                    timeout:float|httpx.Timeout|None|NotGiven=NOT_GIVEN)

Return the value of all tool calls (generally used for structured outputs)

Type Default Details
msgs list List of messages in the dialog
tools Optional None List of tools to make available to Claude
obj Optional None Class to search for tools
ns Optional None Namespace to search for tools
sp str The system prompt
temp int 0 Temperature
maxtok int 4096 Maximum tokens
maxthinktok int 0 Maximum thinking tokens
prefill str Optional prefill to pass to Claude as start of its response
stream bool False Stream response?
stop NoneType None Stop sequence
tool_choice Optional None Optionally force use of some tool
metadata MetadataParam | NotGiven NOT_GIVEN
service_tier Literal[‘auto’, ‘standard_only’] | NotGiven NOT_GIVEN
stop_sequences List[str] | NotGiven NOT_GIVEN
system Union[str, Iterable[TextBlockParam]] | NotGiven NOT_GIVEN
temperature float | NotGiven NOT_GIVEN
thinking ThinkingConfigParam | NotGiven NOT_GIVEN
top_k int | NotGiven NOT_GIVEN
top_p float | NotGiven NOT_GIVEN
extra_headers Optional None Use the following arguments if you need to pass additional parameters to the API that aren’t available via kwargs.
The extra values given here take precedence over values defined on the client or passed to this method.
extra_query Query | None None
extra_body Body | None None
timeout float | httpx.Timeout | None | NotGiven NOT_GIVEN
Exported source
@patch
@delegates(Client.__call__)
def structured(self:Client,
               msgs:list, # List of messages in the dialog
               tools:Optional[list]=None, # List of tools to make available to Claude
               obj:Optional=None, # Class to search for tools
               ns:Optional[abc.Mapping]=None, # Namespace to search for tools
               **kwargs):
    "Return the value of all tool calls (generally used for structured outputs)"
    tools = listify(tools)
    res = self(msgs, tools=tools, tool_choice=tools, **kwargs)
    if ns is None: ns=mk_ns(*tools)
    if obj is not None: ns = mk_ns(obj)
    cts = getattr(res, 'content', [])
    tcs = [call_func(o.name, o.input, ns=ns) for o in cts if isinstance(o,ToolUseBlock)]
    return tcs

Anthropic’s API does not support response formats directly, so instead we provide a structured method to use tool calling to achieve the same result. The result of the tool is not passed back to Claude in this case, but instead is returned directly to the user.

c.structured(pr, tools=[sums])
Finding the sum of 604542 and 6458932
[7063474]
c

ToolUseBlock(id=‘toolu_016oQ36iBvsKumKUtY46ZxYg’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)

| Metric | Count | Cost (USD) |
|--------|------:|-----:|
| Input tokens | 9,396 | 0.028188 |
| Output tokens | 2,425 | 0.036375 |
| Cache tokens | 0 | 0.000000 |
| Server tool use | 0 | 0.000000 |
| **Total** | **11,821** | **$0.064563** |

Custom Types with Tool Use

We need to add tool support for custom types too. Let’s test out custom types using a minimal example.

class Book(BasicRepr):
    def __init__(self, title: str, pages: int): store_attr()
    def __repr__(self):
        return f"Book Title : {self.title}\nNumber of Pages : {self.pages}"
Book("War and Peace", 950)
Book Title : War and Peace
Number of Pages : 950
def find_page(book: Book, # The book to find the halfway point of
              percent: int, # Percent of a book to read to, e.g. halfway == 50, 
) -> int:
    "The page number corresponding to `percent` completion of a book"
    return round(book.pages * (percent / 100.0))
get_schema(find_page)
{'name': 'find_page',
 'description': 'The page number corresponding to `percent` completion of a book\n\nReturns:\n- type: integer',
 'input_schema': {'type': 'object',
  'properties': {'book': {'type': 'object',
    'description': 'The book to find the halfway point of',
    '$ref': '#/$defs/Book'},
   'percent': {'type': 'integer',
    'description': 'Percent of a book to read to, e.g. halfway == 50,'}},
  'title': None,
  'required': ['book', 'percent'],
  '$defs': {'Book': {'type': 'object',
    'properties': {'title': {'type': 'string', 'description': ''},
     'pages': {'type': 'integer', 'description': ''}},
    'title': 'Book',
    'required': ['title', 'pages']}}}}
choice = mk_tool_choice('find_page')
choice
{'type': 'tool', 'name': 'find_page'}

Claudette will pack objects as dicts, so we’ll transform tool functions with user-defined types into tool functions that accept a dict in lieu of the user-defined type.

First let’s convert a single argument:

_is_builtin decides whether to pass an argument through as-is. Let’s check the argument conversion:

(_is_builtin(int), _is_builtin(Book), _is_builtin(List))
(True, False, True)
(_convert(555, int),
 _convert({"title": "War and Peace", "pages": 923}, Book),
 _convert([1, 2, 3, 4], List))
(555,
 Book Title : War and Peace
 Number of Pages : 923,
 [1, 2, 3, 4])

Applying tool() to a function returns a new function in which the user-defined types are replaced with dictionary inputs. Built-in types are left untouched.


source

tool

 tool (func)

A function is transformed into a function with dict arguments substituted for user-defined types. Built-in types such as percent here are left untouched.

find_page(book=Book("War and Peace", 950), percent=50)
475
tool(find_page)({"title": "War and Peace", "pages": 950}, percent=50)
475

By passing tools wrapped with tool(), user-defined types now work in tool calls without failing.

pr = "How many pages do I have to read to get halfway through my 950 page copy of War and Peace"
tools = tool(find_page)
tools
<function __main__.find_page(book: __main__.Book, percent: int) -> int>
r = c(pr, tools=[tools])
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_01L8RLCRck5Dd2szqiaw8V8G', input={'book': {'title': 'War and Peace', 'pages': 950}, 'percent': 50}, name='find_page', type='tool_use')
tr = mk_toolres(r, ns=[tools])
tr
[{'role': 'assistant',
  'content': [{'citations': None,
    'text': "I'll help you find the halfway point of your copy of War and Peace.",
    'type': 'text'},
   {'id': 'toolu_01L8RLCRck5Dd2szqiaw8V8G',
    'input': {'book': {'title': 'War and Peace', 'pages': 950}, 'percent': 50},
    'name': 'find_page',
    'type': 'tool_use'}]},
 {'role': 'user',
  'content': [{'type': 'tool_result',
    'tool_use_id': 'toolu_01L8RLCRck5Dd2szqiaw8V8G',
    'content': '475'}]}]
msgs = [pr]+tr
contents(c(msgs, sp=sp, tools=[tools]))
"To get halfway through your 950-page copy of War and Peace, you need to read to page 475. That means you'll have read 475 pages when you reach the halfway point of the book."

Chat

Rather than manually adding the responses to a dialog, we’ll create a simple Chat class to do that for us, each time we make a request. We’ll also store the system prompt and tools here, to avoid passing them every time.


source

Chat

 Chat (model:Optional[str]=None, cli:Optional[__main__.Client]=None,
       sp='', tools:Optional[list]=None, temp=0,
       cont_pr:Optional[str]=None, cache:bool=False, hist:list=None,
       ns:Optional[collections.abc.Mapping]=None)

Anthropic chat client.

Type Default Details
model Optional None Model to use (leave empty if passing cli)
cli Optional None Client to use (leave empty if passing model)
sp str Optional system prompt
tools Optional None List of tools to make available to Claude
temp int 0 Temperature
cont_pr Optional None User prompt to continue an assistant response
cache bool False Use Claude cache?
hist list None Initialize history
ns Optional None Namespace to search for tools

The class stores the Client that will provide the responses in c, and a history of messages in h.

sp = "Never mention what tools you use."
chat = Chat(model, sp=sp)
chat.c.use, chat.h
(In: 0; Out: 0; Cache create: 0; Cache read: 0; Total Tokens: 0; Server tool use (web search requests): 0,
 [])
chat.c.use.cost(pricing[model_types[chat.c.model]])
0.0

This is clunky. Let’s add cost as a property for the Chat class. It will pass in the appropriate prices for the current model to the usage cost calculator.


source

Chat.cost

 Chat.cost ()
Exported source
@patch(as_prop=True)
def cost(self: Chat) -> float: return self.c.cost
chat.cost
0.0

source

Chat.__call__

 Chat.__call__ (pr=None, temp=None, maxtok=4096, maxthinktok=0,
                stream=False, prefill='', tool_choice:Optional[dict]=None,
                **kw)

Call self as a function.

Type Default Details
pr NoneType None Prompt / message
temp NoneType None Temperature
maxtok int 4096 Maximum tokens
maxthinktok int 0 Maximum thinking tokens
stream bool False Stream response?
prefill str Optional prefill to pass to Claude as start of its response
tool_choice Optional None Optionally force use of some tool
kw VAR_KEYWORD
Exported source
@patch
def _stream(self:Chat, res):
    yield from res
    self.h += mk_toolres(self.c.result, ns=self.tools, obj=self)
Exported source
@patch
def _post_pr(self:Chat, pr, prev_role):
    if pr is None and prev_role == 'assistant':
        if self.cont_pr is None:
            raise ValueError("Prompt must be given after assistant completion, or use `self.cont_pr`.")
        pr = self.cont_pr # No user prompt, keep the chain
    if pr: self.h.append(mk_msg(pr, cache=self.cache))
Exported source
@patch
def _append_pr(self:Chat,
               pr=None,  # Prompt / message
              ):
    prev_role = nested_idx(self.h, -1, 'role') if self.h else 'assistant' # First message should be 'user'
    if pr and prev_role == 'user': self() # already user request pending
    self._post_pr(pr, prev_role)
Exported source
@patch
def __call__(self:Chat,
             pr=None,  # Prompt / message
             temp=None, # Temperature
             maxtok=4096, # Maximum tokens
             maxthinktok=0, # Maximum thinking tokens
             stream=False, # Stream response?
             prefill='', # Optional prefill to pass to Claude as start of its response
             tool_choice:Optional[dict]=None, # Optionally force use of some tool
             **kw):
    if temp is None: temp=self.temp
    self._append_pr(pr)
    res = self.c(self.h, stream=stream, prefill=prefill, sp=self.sp, temp=temp, maxtok=maxtok, maxthinktok=maxthinktok, tools=self.tools, tool_choice=tool_choice,**kw)
    if stream: return self._stream(res)
    self.h += mk_toolres(self.c.result, ns=self.ns)
    return res

The __call__ method just passes the request along to the Client, but rather than just passing in this one prompt, it appends it to the history and passes it all along. As a result, we now have state!

chat = Chat(model, sp=sp)
chat("I'm Jeremy")
chat("What's my name?")

Your name is Jeremy.

  • id: msg_0156kmJs4gbuXu9pv5HgfJSw
  • content: [{'citations': None, 'text': 'Your name is Jeremy.', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 42, 'output_tokens': 8, 'server_tool_use': None, 'service_tier': 'standard'}
chat.use, chat.cost
(In: 59; Out: 25; Cache create: 0; Cache read: 0; Total Tokens: 84; Server tool use (web search requests): 0,
 0.000552)

Let’s try out prefill too:

q = "Concisely, what is the meaning of life?"
pref = 'According to Douglas Adams,'
chat.c.result

Your name is Jeremy.

  • id: msg_0156kmJs4gbuXu9pv5HgfJSw
  • content: [{'citations': None, 'text': 'Your name is Jeremy.', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 42, 'output_tokens': 8, 'server_tool_use': None, 'service_tier': 'standard'}
chat(q, prefill=pref)

According to Douglas Adams,it’s 42. But seriously - to find purpose through connections, growth, and contributing something meaningful to the world around you.

  • id: msg_01RLcuqMLfLuBgM9ZLrH1rsW
  • content: [{'citations': None, 'text': "According to Douglas Adams,it's 42. But seriously - to find purpose through connections, growth, and contributing something meaningful to the world around you.", 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 70, 'output_tokens': 29, 'server_tool_use': None, 'service_tier': 'standard'}

By default, messages must be in user, assistant, user format. If this isn’t followed (e.g. calling chat() without a user message), it will error out:

try: chat()
except ValueError as e: print("Error:", e)
Error: Prompt must be given after assistant completion, or use `self.cont_pr`.

Setting cont_pr allows a “default prompt” to be used when no prompt is specified. It’s usually used to prompt the model to continue.

chat.cont_pr = "keep going..."
chat()

The meaning emerges from the search itself - through love, creativity, reducing suffering, and the simple act of being present. It’s less about finding the answer and more about creating meaning through how you choose to live, moment by moment.

  • id: msg_01194twexLGSrpjmau6pLZmU
  • content: [{'citations': None, 'text': "The meaning emerges from the search itself - through love, creativity, reducing suffering, and the simple act of being present. It's less about finding *the* answer and more about creating meaning through how you choose to live, moment by moment.", 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 105, 'output_tokens': 53, 'server_tool_use': None, 'service_tier': 'standard'}

We can also use streaming:

chat = Chat(model, sp=sp)
for o in chat("I'm Jeremy", stream=True): print(o, end='')
Hi Jeremy! Nice to meet you. How are you doing today?
for o in chat(q, prefill=pref, stream=True): print(o, end='')
According to Douglas Adams,it's 42. But seriously - the meaning of life is what you make it. Most people find it through relationships, personal growth, contributing to something bigger than themselves, and experiencing joy and wonder along the way.

You can provide a history of messages to initialise Chat with:

chat = Chat(model, sp=sp, hist=["Can you guess my name?", "Hmmm I really don't know. Is it 'Merlin G. Penfolds'?"])
chat('Wow how did you know?')

I have to admit - I was just making a playful, completely random guess! I actually have no way of knowing your real name since we just started chatting and you haven’t shared any personal information with me.

If that actually is your name, that would be an absolutely incredible coincidence! But I’m guessing you might be playing along with my silly random guess. Either way, nice to meet you! What would you like to talk about?

  • id: msg_01L2Srvk5ERpZ2CEASCmxUrH
  • content: [{'citations': None, 'text': "I have to admit - I was just making a playful, completely random guess! I actually have no way of knowing your real name since we just started chatting and you haven't shared any personal information with me. \n\nIf that actually is your name, that would be an absolutely incredible coincidence! But I'm guessing you might be playing along with my silly random guess. Either way, nice to meet you! What would you like to talk about?", 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 58, 'output_tokens': 97, 'server_tool_use': None, 'service_tier': 'standard'}

Chat tool use

We automagically get streamlined tool use as well:

pr = f"What is {a}+{b}?"
pr
'What is 604542+6458932?'
chat = Chat(model, sp=sp, tools=[sums])
r = chat(pr)
r
Finding the sum of 604542 and 6458932

ToolUseBlock(id=‘toolu_01SRHNwdKtdGo5qJCMHRfqpR’, input={‘a’: 604542, ‘b’: 6458932}, name=‘sums’, type=‘tool_use’)

  • id: msg_01WHBRPYihqEtkVo22KJFdSs
  • content: [{'id': 'toolu_01SRHNwdKtdGo5qJCMHRfqpR', 'input': {'a': 604542, 'b': 6458932}, 'name': 'sums', 'type': 'tool_use'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: tool_use
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 437, 'output_tokens': 72, 'server_tool_use': None, 'service_tier': 'standard'}

Now we need to send this result to Claude—calling the object with no parameters tells it to return the tool result to Claude:

chat()

604542 + 6458932 = 7,063,474

  • id: msg_01JXDmQDKYe79Mr39m1d1V4q
  • content: [{'citations': None, 'text': '604542 + 6458932 = 7,063,474', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 525, 'output_tokens': 19, 'server_tool_use': None, 'service_tier': 'standard'}

It should be correct, because it actually used our Python function to do the addition. Let’s check:

a+b
7063474

Let’s test a function with user-defined types.
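The find_page tool and its Book type were defined earlier in the notebook; for reference, they look something like this (a sketch only, assuming a simple dataclass; the actual definitions may differ):

from dataclasses import dataclass

@dataclass
class Book:
    title: str  # The title of the book
    pages: int  # Total number of pages in the book

def find_page(
    book: Book,  # The book to look in
    percent: int # How far through the book, as a percentage
) -> int: # The page number at that point
    "Find the page number at `percent` of the way through `book`."
    return round(book.pages * percent / 100)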

chat = Chat(model, sp=sp, tools=[find_page])
r = chat("How many pages is three quarters of the way through my 80 page edition of Tao Te Ching?")
r

ToolUseBlock(id=‘toolu_01HRLorqEXorJrA144DN9yhe’, input={‘book’: {‘title’: ‘Tao Te Ching’, ‘pages’: 80}, ‘percent’: 75}, name=‘find_page’, type=‘tool_use’)

  • id: msg_01KDqin37VP2S1bTYKjydwxB
  • content: [{'id': 'toolu_01HRLorqEXorJrA144DN9yhe', 'input': {'book': {'title': 'Tao Te Ching', 'pages': 80}, 'percent': 75}, 'name': 'find_page', 'type': 'tool_use'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: tool_use
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 547, 'output_tokens': 86, 'server_tool_use': None, 'service_tier': 'standard'}

Now we need to send this result to Claude—calling the object with no parameters tells it to return the tool result to Claude:

chat()

Three quarters of the way through your 80-page edition of Tao Te Ching would be page 60.

  • id: msg_01HG7CttafwjTdxLwyj2rpS7
  • content: [{'citations': None, 'text': 'Three quarters of the way through your 80-page edition of Tao Te Ching would be page 60.', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 647, 'output_tokens': 29, 'server_tool_use': None, 'service_tier': 'standard'}

It should be correct, because it actually used our Python function to do the calculation. Let’s check:

80 * .75
60.0
Let’s also give Chat a nice markdown representation, showing the latest response, a collapsible message history, and the client’s usage/cost summary:

Exported source
@patch
def _repr_markdown_(self:Chat):
    if not hasattr(self.c, 'result'): return 'No results yet'
    last_msg = contents(self.c.result)
    
    def fmt_msg(m):
        t = contents(m)
        if isinstance(t, dict): return t['content']
        return t
        
    history = '\n\n'.join(f"**{m['role']}**: {fmt_msg(m)}" 
                         for m in self.h)
    det = self.c._repr_markdown_().split('\n\n')[-1]
    if history: history = f"""
<details>
<summary>► History</summary>

{history}

</details>
"""

    return f"""{last_msg}
{history}
{det}"""
chat

Three quarters of the way through your 80-page edition of Tao Te Ching would be page 60.

► History

user: H

assistant: {‘id’: ‘toolu_01HRLorqEXorJrA144DN9yhe’, ‘input’: {‘book’: {‘title’: ‘Tao Te Ching’, ‘pages’: 80}, ‘percent’: 75}, ‘name’: ‘find_page’, ‘type’: ‘tool_use’}

user: 60

assistant: Three quarters of the way through your 80-page edition of Tao Te Ching would be page 60.

Metric            Count   Cost (USD)
Input tokens      1,194   0.003582
Output tokens       115   0.001725
Cache tokens          0   0.000000
Server tool use       0   0.000000
Total             1,309   $0.005307
Claude’s built-in text editor tool works the same way; here we pass its config and supply the handler function via a namespace:

chat = Chat(model, tools=[text_editor_conf['sonnet']], ns=mk_ns(str_replace_editor))

Note that mk_ns(str_replace_editor) is used here. When tools aren’t provided directly as Python functions (like sums), you must create and pass a namespace dictionary, mapping each tool name string to its function object, via the ns parameter to methods like mk_toolres or toolloop, since toolslm can’t generate the namespace automatically in this case. For schema-based tools (i.e. plain Python functions), claudette handles namespace creation automatically.
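To make this concrete, the namespace is simply a dict from tool-name strings to callables, which is how the name field of a ToolUseBlock gets resolved back to a Python function. Roughly (a sketch of what mk_ns produces here, not its exact output):

ns = mk_ns(str_replace_editor)
# roughly: {'str_replace_editor': <function str_replace_editor>}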

r = chat('Please explain what my _quarto.yml does. Use your tools')
find_block(r, ToolUseBlock)
ToolUseBlock(id='toolu_01TzxFynrCHxg7BdqUBf8JR7', input={'command': 'view', 'path': '.'}, name='str_replace_editor', type='tool_use')
chat()

Now let me view the _quarto.yml file:

  • id: msg_012iXMHdvn7UrtdZxazRThSK
  • content: [{'citations': None, 'text': 'Now let me view the_quarto.ymlfile:', 'type': 'text'}, {'id': 'toolu_0116CFtXVmpBxHs4T2jyZsno', 'input': {'command': 'view', 'path': '_quarto.yml'}, 'name': 'str_replace_editor', 'type': 'tool_use'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: tool_use
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 22200, 'output_tokens': 91, 'server_tool_use': None, 'service_tier': 'standard'}

Images

Claude can handle image data as well. As everyone knows, when testing image APIs you have to use a cute puppy.

# Image is Cute_dog.jpg from Wikimedia
fn = Path('samples/puppy.jpg')
display.Image(filename=fn, width=200)

img = fn.read_bytes()

Claude expects an image message to have the following structure:

{
    'role': 'user', 
    'content': [
        {'type':'text', 'text':'What is in the image?'},
        {
            'type':'image', 
            'source': {
                'type':'base64', 'media_type':'media_type', 'data': 'data'
            }
        }
    ]
}

msglm automatically detects if a message is an image, encodes it, and generates the data structure above. All we need to do is create a list containing our image and a query, and then pass it to mk_msg.

Let’s try it out…

q = "In brief, what color flowers are in this image?"
msg = mk_msg([img, q])
c([msg])

The flowers in this image are purple.

  • id: msg_01HxVQSFK7BAHtRGcd5qP7Xs
  • content: [{'citations': None, 'text': 'The flowers in this image are purple.', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 11, 'server_tool_use': None, 'service_tier': 'standard'}

You don’t need to call mk_msg on each message before passing it to the Chat class. Instead, you can pass your messages as a list and the Chat class will automatically call mk_msgs in the background.

c(["How are you?", r])

For messages that contain multiple content types (like an image with a question), you’ll need to enclose the message contents in a list as shown below:

c(["How are you?", r, [img, q]])
c = Chat(model)
c([img, q])

The flowers in this image are purple.

  • id: msg_0113qpyXgeZuJTtdkZrpQZBw
  • content: [{'citations': None, 'text': 'The flowers in this image are purple.', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 110, 'output_tokens': 11, 'server_tool_use': None, 'service_tier': 'standard'}
Since an image block has no text, let’s update the contents helper to show a placeholder for media blocks:

def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    if not blk and r.content: blk = r.content[0]
    if hasattr(blk,'text'): return blk.text.strip()
    elif hasattr(blk,'content'): return blk.content.strip()
    elif hasattr(blk,'source'): return f'*Media Type - {blk.type}*'
    return str(blk)
contents(c.h[0])
'*Media Type - image*'
c

The flowers in this image are purple.

► History

user: Media Type - image

assistant: The flowers in this image are purple.

Metric            Count   Cost (USD)
Input tokens        110   0.000330
Output tokens        11   0.000165
Cache tokens          0   0.000000
Server tool use       0   0.000000
Total               121   $0.000495
Note

Unfortunately, not all Claude models support images 😞. This table summarizes the capabilities of each Claude model and the different modalities they support.

Caching

Claude supports context caching by adding a cache_control header to the message content.

{
    "role": "user",
    "content": [
        {
            "type": "text", 
            "text": "Please cache my message", 
            "cache_control": {"type": "ephemeral"}
        }
    ]
}

To cache a message, we simply set cache=True when calling mk_msg.

mk_msg(['hi', 'there'], cache=True)
{ 'content': [ {'text': 'hi', 'type': 'text'},
               { 'cache_control': {'type': 'ephemeral'},
                 'text': 'there',
                 'type': 'text'}],
  'role': 'user'}

Claude now also supports smart cache look-ups, so it’s simple to keep an entire conversation in cache by updating the cache with each new message. To do this, we just need to set cache=True when creating a Chat.

chat = Chat(model, sp=sp, cache=True)

Caching has a minimum token limit of 1024 tokens for Sonnet and Opus, and 2048 for Haiku. If your conversation is below this limit, it will not be cached.

chat("Hi, I'm Jeremy.")

Hi Jeremy! Nice to meet you. How are you doing today?

  • id: msg_01WwG6cLtPdPwjRMS66XERX7
  • content: [{'citations': None, 'text': 'Hi Jeremy! Nice to meet you. How are you doing today?', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 20, 'output_tokens': 17, 'server_tool_use': None, 'service_tier': 'standard'}

Note the usage stats: no cache was created or read. Now let’s send a message long enough to trigger caching.

chat("""Lorem ipsum dolor sit amet""" * 150)

I see you’ve sent a very long block of “Lorem ipsum dolor sit amet” repeated many times! Lorem ipsum is that classic placeholder text that designers and typesetters use when they want to focus on layout without being distracted by actual content.

Was this intentional, Jeremy? Are you testing something, or did you perhaps copy and paste more than you meant to? I’m happy to chat about whatever’s on your mind - whether it’s about placeholder text, design, or something completely different!

  • id: msg_01X3DpgFFkKQ614PrPixUgJ8
  • content: [{'citations': None, 'text': 'I see you\'ve sent a very long block of "Lorem ipsum dolor sit amet" repeated many times! Lorem ipsum is that classic placeholder text that designers and typesetters use when they want to focus on layout without being distracted by actual content.\n\nWas this intentional, Jeremy? Are you testing something, or did you perhaps copy and paste more than you meant to? I\'m happy to chat about whatever\'s on your mind - whether it\'s about placeholder text, design, or something completely different!', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 1084, 'cache_read_input_tokens': 0, 'input_tokens': 4, 'output_tokens': 105, 'server_tool_use': None, 'service_tier': 'standard'}

The context is now long enough for caching to kick in: the entire conversation history has been written to the temporary cache (note cache_creation_input_tokens above), and any subsequent message will read from it rather than re-processing the full history.
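If you want to check this programmatically, the usage block of the most recent response is available on the underlying client (the same c.result attribute used by _repr_markdown_ above):

u = chat.c.result.usage
# cache_creation_input_tokens > 0: the history was just written to the cache
# cache_read_input_tokens > 0: a later call read the history back from the cache
u.cache_creation_input_tokens, u.cache_read_input_tokens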

chat("Oh thank you! Sorry, my lorem ipsum generator got out of control!")

No worries at all, Jeremy! Those lorem ipsum generators can definitely get a bit enthusiastic sometimes - it’s like they’re trying to fill every possible space with placeholder text!

I’ve seen that happen before where someone means to generate a paragraph or two and suddenly ends up with what looks like the entire works of Cicero repeated endlessly. At least it wasn’t one of those generators that throws in random funny phrases - those can be entertaining but not always appropriate for professional mockups!

Are you working on some kind of design or layout project that needed the placeholder text?

  • id: msg_01Na27AHtq3Fstwum2fQDU9a
  • content: [{'citations': None, 'text': "No worries at all, Jeremy! Those lorem ipsum generators can definitely get a bit enthusiastic sometimes - it's like they're trying to fill every possible space with placeholder text! \n\nI've seen that happen before where someone means to generate a paragraph or two and suddenly ends up with what looks like the entire works of Cicero repeated endlessly. At least it wasn't one of those generators that throws in random funny phrases - those can be entertaining but not always appropriate for professional mockups!\n\nAre you working on some kind of design or layout project that needed the placeholder text?", 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 123, 'cache_read_input_tokens': 1084, 'input_tokens': 4, 'output_tokens': 122, 'server_tool_use': None, 'service_tier': 'standard'}

Extended Thinking

Claude Sonnet 3.7 and later, as well as Opus 4, have enhanced reasoning capabilities for complex tasks. See the docs for more info.

We can enable extended thinking by passing a thinking param with the following structure.

thinking={
    "type": "enabled",
    "budget_tokens": 16000
}

When extended thinking is enabled, a thinking block is included in the response, as shown below.

{
  "content": [
    {
      "type": "thinking",
      "thinking": "To approach this, let's think about...",
      "signature": "Imtakcjsu38219c0.eyJoYXNoIjoiYWJjM0NTY3fQ...."
    },
    {
      "type": "text",
      "text": "Yes, there are infinitely many prime numbers such that..."
    }
  ]
}

Let’s add a maxthinktok param to the Client and Chat call methods. When this value is not 0, we’ll pass a thinking param of {"type":"enabled", "budget_tokens":maxthinktok} to Claude.

Note: when thinking is enabled, prefill must be empty and temperature must be 1.
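In other words, maxthinktok is just a convenience that builds the thinking param for us; conceptually it does something like this (a hypothetical helper sketching the idea, not claudette’s actual code):

def _mk_thinking(maxthinktok):
    "Map a token budget to Claude's `thinking` param; 0 means disabled."
    if not maxthinktok: return None
    return {'type': 'enabled', 'budget_tokens': maxthinktok}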


source

think_md

 think_md (txt, thk)
We update contents once more so that, when a ThinkingBlock is present, the thinking text is rendered (via think_md) alongside the response:

def contents(r):
    "Helper to get the contents from Claude response `r`."
    blk = find_block(r)
    tk_blk = find_block(r, blk_type=ThinkingBlock)
    if tk_blk: return think_md(blk.text.strip(), tk_blk.thinking.strip())
    if not blk and r.content: blk = r.content[0]
    if hasattr(blk,'text'): return blk.text.strip()
    elif hasattr(blk,'content'): return blk.content.strip()
    elif hasattr(blk,'source'): return f'*Media Type - {blk.type}*'
    return str(blk)

Let’s call the model without extended thinking enabled.

tk_model = first(has_extended_thinking_models)
tk_model
'claude-sonnet-4-20250514'
chat = Chat(tk_model)
chat("Write a sentence about Python!")

Python is a versatile, high-level programming language known for its clean syntax and readability, making it popular for everything from web development and data science to artificial intelligence and automation.

  • id: msg_01Uaszo5nidiL7qdWGgoLdWr
  • content: [{'citations': None, 'text': 'Python is a versatile, high-level programming language known for its clean syntax and readability, making it popular for everything from web development and data science to artificial intelligence and automation.', 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 13, 'output_tokens': 40, 'server_tool_use': None, 'service_tier': 'standard'}

Now, let’s call the model with extended thinking enabled.

chat("Write a sentence about Python!", maxthinktok=1024)

Python’s extensive library ecosystem and beginner-friendly nature have made it one of the most widely-used programming languages for both newcomers learning to code and experienced developers building complex applications.

Thinking The human is asking me to write a sentence about Python again. They might want a different sentence this time, or they could be testing to see if I give the same response. I should provide a different sentence about Python to be more helpful and show variety.
  • id: msg_01AZruBBQow3mJRAhLz358iZ
  • content: [{'signature': 'EqwDCkYIAxgCKkBaKOaWkezErT/XTiPfBnoHKZzpnv4NlrHqSGpnqHCsRQrarH2q/2JQqxA1BMfvcctPnBC7DofM3NbeU0glSbxHEgwO6sSDzrfFjAmIAcEaDDE0Xx90wLyTu+938SIwXJTDcE025ox6c9PdfkCjEDLaq5H1FPtUyQUOwLbmtFZyQp1R08fvmZLZs5hqISTTKpMC78Zo0+N2wpcwCrXepez4jcp6kGQWzl7a9lrdYmPSn3Rpv+MCAsz28CGoLWo5uiwA32T5iQpJ8K9kzZ5VAryc+DKA8cdtkrbo1Zev2XmKhhrz3m0ssbuKTPvULzywVDrcULxbhFRdHPOprZXPdy/42RLED1gYNDPxIlO+8ER2QQmkquB3EU3sVRXH7zjrrAY7N1Mdn7/GntM0FmF4Ojod9tZ+U5ATN/dfGl10Yyl4nMMAAUOLbVmyss622msi0JJU96uRFBO/ZqBDLTlKl8w3YykNvEmjh5pkYodaqEUHmdId1Dh39z6EbbOBOtKLWff3eYgioTiqm07TyLMCCKXKgyjzGiNfZinkZsBdpBrBinC1ELIYAQ==', 'thinking': 'The human is asking me to write a sentence about Python again. They might want a different sentence this time, or they could be testing to see if I give the same response. I should provide a different sentence about Python to be more helpful and show variety.', 'type': 'thinking'}, {'citations': None, 'text': "Python's extensive library ecosystem and beginner-friendly nature have made it one of the most widely-used programming languages for both newcomers learning to code and experienced developers building complex applications.", 'type': 'text'}]
  • model: claude-sonnet-4-20250514
  • role: assistant
  • stop_reason: end_turn
  • stop_sequence: None
  • type: message
  • usage: {'cache_creation_input_tokens': 0, 'cache_read_input_tokens': 0, 'input_tokens': 90, 'output_tokens': 101, 'server_tool_use': None, 'service_tier': 'standard'}

Third party providers

Amazon Bedrock

These are Amazon’s current Claude models:

models_aws
['claude-3-5-haiku-20241022',
 'claude-3-7-sonnet-20250219',
 'anthropic.claude-3-opus-20240229-v1:0',
 'anthropic.claude-3-5-sonnet-20241022-v2:0']
Note

The anthropic SDK at version 0.34.2 seems not to install boto3 as a dependency. You may need to pip install boto3, otherwise the creation of the Client below will fail.

Provided boto3 is installed, we don’t need any extra code to support Amazon Bedrock – we just have to set up the appropriate client:

ab = AnthropicBedrock(
    aws_access_key=os.environ['AWS_ACCESS_KEY'],
    aws_secret_key=os.environ['AWS_SECRET_KEY'],
)
client = Client(models_aws[-1], ab)
chat = Chat(cli=client)
chat("I'm Jeremy")

Google Vertex

models_goog
['anthropic.claude-3-sonnet-20240229-v1:0',
 'anthropic.claude-3-haiku-20240307-v1:0',
 'claude-3-opus@20240229',
 'claude-3-5-sonnet-v2@20241022',
 'claude-3-sonnet@20240229',
 'claude-3-haiku@20240307']
from anthropic import AnthropicVertex
import google.auth
project_id = google.auth.default()[1]
region = "us-east5"
gv = AnthropicVertex(project_id=project_id, region=region)
client = Client(models_goog[-1], gv)
chat = Chat(cli=client)
chat("I'm Jeremy")

Footnotes

  1. https://www.nbcsandiego.com/weather/todays-san-diego-forecast/152395/ “Early dense fog will continue through about mid-morning today with skies clearing to mostly sunny to partly cloudy.”↩︎

  2. https://www.nbcsandiego.com/weather/todays-san-diego-forecast/152395/ “Early dense fog will continue through about mid-morning today with skies clearing to mostly sunny to partly cloudy.”↩︎

  3. https://www.nbcsandiego.com/weather/todays-san-diego-forecast/152395/ “Today will be slightly cooler than yesterday but still unseasonably warm.”↩︎

  4. https://www.nbcsandiego.com/weather/todays-san-diego-forecast/152395/ “Today will be slightly cooler than yesterday but still unseasonably warm.”↩︎

  5. https://www.wunderground.com/weather/us/ca/san-diego “zoom out · Showing Stations · access_time 5:45 AM PDT on May 22, 2025 (GMT -7) | Updated 3 seconds ago · 71° | 60° · 63 °F · like 63° · Cloudy · N · 2…”↩︎

  6. https://www.wunderground.com/weather/us/ca/san-diego “/ 0.00 °in Foggy this morning, then partly cloudy this afternoon. High 71F. Winds SW at 5 to 10 mph.”↩︎

  7. https://www.wunderground.com/weather/us/ca/san-diego “/ 0.00 °in Mostly cloudy. Expect mist and reduced visibilities at times. Low near 60F. Winds S at 5 to 10 mph.”↩︎

  8. https://www.nbcsandiego.com/weather/todays-san-diego-forecast/152395/ “We continue to cool down more heading into Friday and the weekend with the return of more clouds and onshore winds.”↩︎

  9. https://www.nbcsandiego.com/weather/todays-san-diego-forecast/152395/ “The weekend does look to remain dry with no big storms impacting Southern California anytime soon.”↩︎